
markov decision process in a sentence

"markov decision process"是什么意思   

Example sentences

  1. value iteration optimization for semi-Markov decision processes
  2. inventory control using adaptive decision-making based on Markov decision processes
  3. application of continuous-time Markov decision processes to the call admission control problem
  4. performance optimization for countable semi-Markov decision processes with discounted cost
  5. optimal bidding strategy based on the Markov decision process for generation companies
  6. It's difficult to find markov decision process in a sentence.
  7. research on dynamic Bayesian networks in non-time-homogeneous Markov decision processes
  8. discrete-time Markov decision process based multi-period decision problem for generation companies
  9. an energy management strategy for a fuel cell hybrid electric vehicle based on the Markov decision process
  10. the optimal inspection and maintenance model for the deteriorating system is presented, based on the semi-Markov decision process
  11. to solve the problem that the proposed phase-type (PH) distribution changes the state space of the system, the value iteration algorithm for the semi-Markov decision process is improved to obtain the optimal inspection and maintenance policy
  12. Markov decision process (MDP for short), also called sequential stochastic optimization, stochastic optimal control, controlled Markov process, or stochastic dynamic programming, is the theory of stochastic sequential decision-making (see the value iteration sketch after this list)
  13. reinforcement learning algorithms that use the cerebellar model articulation controller (CMAC) are studied to estimate the optimal value function of Markov decision processes (MDPs) with continuous states and discrete actions. The state discretization for MDPs using SARSA learning algorithms based on CMAC networks and direct gradient rules is analyzed. Two new coding methods for CMAC neural networks, non-adjacent overlapping coding and variable-scale coding, are proposed so that the learning efficiency of CMAC-based direct gradient learning algorithms can be improved (see the tile-coding sketch after this list)
  14. for the optimization of the condition-based maintenance (CBM) policy for a repeated single mission, the Markov decision process (MDP) is applied; in addition, for the case where the condition data are not collected periodically, a numerical method is proposed in which the maintenance threshold on the state is determined from the relation between the average availability or average cost and the system states
  15. as an example, the parallel machine scheduling problem is mapped onto an unconstrained matrix solution construction graph, and an ACO algorithm is proposed to solve it. Compared with other best-performing algorithms, the proposed algorithm is shown to be very effective. The finite deterministic Markov decision process corresponding to the solution construction procedure of the ACO algorithm is described in the terminology of reinforcement learning (RL) theory
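
Sentences 1, 11, and 12 above all refer to value iteration for (semi-)Markov decision processes. As a rough companion, here is a minimal value iteration sketch in Python for a small finite MDP; the two-state toy problem, its transition probabilities and rewards, and the constants GAMMA and THETA are invented for illustration and are not taken from any of the papers quoted above.

    # Minimal value iteration for a toy finite MDP (illustrative values only).
    GAMMA = 0.9   # discount factor
    THETA = 1e-6  # convergence threshold

    # transitions[state][action] = list of (probability, next_state, reward)
    transitions = {
        "low":  {"wait":  [(1.0, "low", 0.0)],
                 "order": [(0.8, "high", -2.0), (0.2, "low", -2.0)]},
        "high": {"wait":  [(0.6, "high", 1.0), (0.4, "low", 1.0)],
                 "order": [(1.0, "high", -1.0)]},
    }

    def value_iteration(transitions, gamma=GAMMA, theta=THETA):
        V = {s: 0.0 for s in transitions}
        while True:
            delta = 0.0
            for s, actions in transitions.items():
                # Bellman optimality backup: V(s) = max_a sum_s' p(s'|s,a) * (r + gamma * V(s'))
                best = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values()
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < theta:
                break
        # greedy policy with respect to the converged value function
        policy = {
            s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                              for p, s2, r in actions[a]))
            for s, actions in transitions.items()
        }
        return V, policy

    V, policy = value_iteration(transitions)
    print(V, policy)

The loop repeatedly applies the Bellman optimality backup until the largest change in any state value falls below THETA, then reads off a greedy policy; this is the basic scheme that sentences 1 and 11 describe adapting to semi-Markov decision processes.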
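
Sentence 13 concerns approximating the value function of an MDP with continuous states and discrete actions using CMAC (tile coding) networks. The sketch below, assuming a one-dimensional state in [0, 1] and two actions, shows the basic mechanism only; the tiling layout, step size, and helper names (active_tiles, q_value, update) are illustrative assumptions and do not reproduce the specific coding schemes (non-adjacent overlapping coding, variable-scale coding) proposed in that work.

    # Minimal CMAC-style tile coding for Q(s, a) over a continuous 1-D state.
    NUM_TILINGS = 4
    TILES_PER_TILING = 10
    STATE_LOW, STATE_HIGH = 0.0, 1.0
    ACTIONS = [0, 1]             # two discrete actions
    ALPHA = 0.1 / NUM_TILINGS    # step size shared across tilings
    GAMMA = 0.95

    # one weight per (tiling, tile, action)
    weights = [[[0.0 for _ in ACTIONS] for _ in range(TILES_PER_TILING + 1)]
               for _ in range(NUM_TILINGS)]

    def active_tiles(state):
        """Return the active tile index in each (offset) tiling."""
        width = (STATE_HIGH - STATE_LOW) / TILES_PER_TILING
        tiles = []
        for t in range(NUM_TILINGS):
            offset = t * width / NUM_TILINGS
            idx = int((state - STATE_LOW + offset) / width)
            tiles.append(min(idx, TILES_PER_TILING))
        return tiles

    def q_value(state, action):
        # Q(s, a) is the sum of one weight from each tiling
        return sum(weights[t][tile][action] for t, tile in enumerate(active_tiles(state)))

    def update(state, action, target):
        """Move Q(state, action) toward a TD target by adjusting each active tile."""
        error = target - q_value(state, action)
        for t, tile in enumerate(active_tiles(state)):
            weights[t][tile][action] += ALPHA * error

An on-policy method such as SARSA would call update(state, action, reward + GAMMA * q_value(next_state, next_action)) after every transition; because each tiling contributes exactly one active weight, nearby states share weights and the approximation generalizes across the continuous state space.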

Neighboring words

  1. "markov clustering"造句
  2. "markov condition"造句
  3. "markov constraint"造句
  4. "markov decision"造句
  5. "markov decision problems"造句
  6. "markov decision processes"造句
  7. "markov equation"造句
  8. "markov estimate"造句
  9. "markov field"造句
  10. "markov inequality"造句